[libc++] refactor cxx_atomic_wait to make it reusable for atomic_ref #81427


Merged
merged 2 commits into llvm:main on Mar 2, 2024

Conversation

huixie90
Contributor

@huixie90 huixie90 commented Feb 11, 2024

The goal of this patch is to make atomic's wait functions reusable by atomic_ref.
#76647

First, this patch is built on top of #80596 to reduce future merge conflicts.

This patch turns the following functions into "APIs" to be used by atomic, atomic_flag, semaphore, latch, and atomic_ref:

__atomic_wait
__atomic_wait_unless
__atomic_notify_one
__atomic_notify_all

These functions are made generic to support both atomic types and atomic_ref. There are two customisation points:

// How to load the value from the given type (with a memory order)
__atomic_load_cpo
// What the contention address is that the platform `wait` function is going to monitor
__atomic_contention_address_cpo
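Conceptually, a generic wait can be built purely on top of these two hooks. The sketch below is a simplified illustration, not libc++'s actual implementation: the names `load`, `contention_address`, `wait_unless`, and `wait_example` are made up for the example, the customisation points are plain overloads instead of CPOs, and the real spin/yield/platform-wait backoff machinery is replaced by a bare spin-and-yield loop.

```cpp
#include <atomic>
#include <thread>

// Stand-ins for the two customisation points, written here as plain
// overloads over std::atomic (the real code dispatches via __tag_invoke).
template <class T>
T load(const std::atomic<T>& a, std::memory_order order) {
  return a.load(order);
}
template <class T>
const std::atomic<T>* contention_address(const std::atomic<T>& a) {
  return &a; // the address a platform wait would monitor
}

// Generic wait-until-predicate built only on the hooks above. Real code
// would escalate from spinning to yielding to a platform wait on
// contention_address(w); here we only spin and yield.
template <class Waitable, class Poll>
void wait_unless(const Waitable& w, Poll poll, std::memory_order order) {
  while (!poll(load(w, order)))
    std::this_thread::yield();
}

inline int wait_example() {
  std::atomic<int> flag{0};
  std::thread t([&] { flag.store(1, std::memory_order_release); });
  wait_unless(flag, [](int v) { return v == 1; }, std::memory_order_acquire);
  t.join();
  return flag.load();
}
```

Because `wait_unless` only ever touches the waitable through the two hooks, any type that provides them (an atomic, an atomic_flag, or later an atomic_ref) can reuse the same wait loop.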

I use "tag_invoke" to implement the customisation points, as it makes derived (atomic) -> base (atomic_base) conversions easier if we need them.
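The derived-to-base benefit comes from the hidden-friend form of tag_invoke: ADL for a derived class also considers its base classes, so a customisation defined once on the base is found for every derived type. A minimal sketch (illustrative names, not the libc++ `__tag_invoke` spellings):

```cpp
// A minimal tag_invoke-style CPO: calling it forwards to an ADL-found
// tag_invoke overload, passing itself as the tag.
struct load_cpo {
  template <class T>
  auto operator()(const T& t) const { return tag_invoke(*this, t); }
};

struct base {
  int value = 42;
  // Hidden friend: only visible via ADL. A derived object converts to
  // const base& here, so derived types inherit the customisation.
  friend int tag_invoke(load_cpo, const base& self) { return self.value; }
};

struct derived : base {};

inline int load_example() {
  load_cpo load{};
  derived d;
  return load(d); // ADL on derived finds base's hidden friend
}
```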

I have implemented these two CPOs for atomic_base and atomic_flag. They both use the "atomic abstraction layer" cxx_atomic_impl.

For atomic_ref (not implemented in this patch), the load and contention-address functions will be different, because:

  • it does not use the "atomic abstraction layer", so the load operation will be a gcc builtin
  • the contention address will be the user's actual object that the atomic_ref points to
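A hypothetical sketch of what those two customisations could look like for an atomic_ref-like type (this is not the libc++ implementation; the type and tag names are invented, and `__atomic_load_n` is the gcc/clang builtin the real code might use):

```cpp
#include <cstdint>

struct atomic_load_cpo_tag {};
struct contention_address_cpo_tag {};

// Hypothetical atomic_ref-like wrapper: owns no storage, just points at
// the user's object.
template <class T>
struct my_atomic_ref {
  T* ptr_;

  // Load goes straight through the compiler builtin, bypassing the
  // __cxx_atomic_impl abstraction layer.
  friend T tag_invoke(atomic_load_cpo_tag, const my_atomic_ref& self) {
    return __atomic_load_n(self.ptr_, __ATOMIC_SEQ_CST);
  }

  // The platform wait would monitor the user's object directly.
  friend const T* tag_invoke(contention_address_cpo_tag, const my_atomic_ref& self) {
    return self.ptr_;
  }
};

inline int ref_example() {
  std::int64_t value = 7;
  my_atomic_ref<std::int64_t> ref{&value};
  const std::int64_t* addr = tag_invoke(contention_address_cpo_tag{}, ref);
  // The contention address is the user's own object, not a shadow copy.
  return addr == &value ? static_cast<int>(tag_invoke(atomic_load_cpo_tag{}, ref)) : -1;
}
```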

@huixie90 huixie90 marked this pull request as ready for review February 11, 2024 20:40
@huixie90 huixie90 requested a review from a team as a code owner February 11, 2024 20:40
@llvmbot llvmbot added the libc++ libc++ C++ Standard Library. Not GNU libstdc++. Not libc++abi. label Feb 11, 2024
@llvmbot
Member

llvmbot commented Feb 11, 2024

@llvm/pr-subscribers-libcxx

Author: Hui (huixie90)


Full diff: https://github.com/llvm/llvm-project/pull/81427.diff

5 Files Affected:

  • (modified) libcxx/include/__atomic/atomic_base.h (+26-10)
  • (modified) libcxx/include/__atomic/atomic_flag.h (+23-2)
  • (modified) libcxx/include/__atomic/atomic_sync.h (+133-43)
  • (modified) libcxx/include/latch (+9-2)
  • (modified) libcxx/include/semaphore (+2-11)
diff --git a/libcxx/include/__atomic/atomic_base.h b/libcxx/include/__atomic/atomic_base.h
index 3ad3b562c59805..b7eebcb7651aea 100644
--- a/libcxx/include/__atomic/atomic_base.h
+++ b/libcxx/include/__atomic/atomic_base.h
@@ -102,26 +102,42 @@ struct __atomic_base // false
     return std::__cxx_atomic_compare_exchange_strong(std::addressof(__a_), std::addressof(__e), __d, __m, __m);
   }
 
+  friend _LIBCPP_AVAILABILITY_SYNC _LIBCPP_HIDE_FROM_ABI _Tp
+  __tag_invoke(__atomic_load_cpo, const __atomic_base& __this, memory_order __order) {
+    return __this.load(__order);
+  }
+
+  friend _LIBCPP_AVAILABILITY_SYNC _LIBCPP_HIDE_FROM_ABI _Tp
+  __tag_invoke(__atomic_load_cpo, const volatile __atomic_base& __this, memory_order __order) {
+    return __this.load(__order);
+  }
+
+  friend _LIBCPP_AVAILABILITY_SYNC _LIBCPP_HIDE_FROM_ABI __cxx_atomic_impl<_Tp> const*
+  __tag_invoke(__atomic_contention_address_cpo, const __atomic_base& __this) {
+    return std::addressof(__this.__a_);
+  }
+
+  friend _LIBCPP_AVAILABILITY_SYNC _LIBCPP_HIDE_FROM_ABI __cxx_atomic_impl<_Tp> const volatile*
+  __tag_invoke(__atomic_contention_address_cpo, const volatile __atomic_base& __this) {
+    return std::addressof(__this.__a_);
+  }
+
   _LIBCPP_AVAILABILITY_SYNC _LIBCPP_HIDE_FROM_ABI void wait(_Tp __v, memory_order __m = memory_order_seq_cst) const
       volatile _NOEXCEPT {
-    std::__cxx_atomic_wait(std::addressof(__a_), __v, __m);
+    std::__atomic_wait(*this, __v, __m);
   }
   _LIBCPP_AVAILABILITY_SYNC _LIBCPP_HIDE_FROM_ABI void
   wait(_Tp __v, memory_order __m = memory_order_seq_cst) const _NOEXCEPT {
-    std::__cxx_atomic_wait(std::addressof(__a_), __v, __m);
+    std::__atomic_wait(*this, __v, __m);
   }
   _LIBCPP_AVAILABILITY_SYNC _LIBCPP_HIDE_FROM_ABI void notify_one() volatile _NOEXCEPT {
-    std::__cxx_atomic_notify_one(std::addressof(__a_));
-  }
-  _LIBCPP_AVAILABILITY_SYNC _LIBCPP_HIDE_FROM_ABI void notify_one() _NOEXCEPT {
-    std::__cxx_atomic_notify_one(std::addressof(__a_));
+    std::__atomic_notify_one(*this);
   }
+  _LIBCPP_AVAILABILITY_SYNC _LIBCPP_HIDE_FROM_ABI void notify_one() _NOEXCEPT { std::__atomic_notify_one(*this); }
   _LIBCPP_AVAILABILITY_SYNC _LIBCPP_HIDE_FROM_ABI void notify_all() volatile _NOEXCEPT {
-    std::__cxx_atomic_notify_all(std::addressof(__a_));
-  }
-  _LIBCPP_AVAILABILITY_SYNC _LIBCPP_HIDE_FROM_ABI void notify_all() _NOEXCEPT {
-    std::__cxx_atomic_notify_all(std::addressof(__a_));
+    std::__atomic_notify_all(*this);
   }
+  _LIBCPP_AVAILABILITY_SYNC _LIBCPP_HIDE_FROM_ABI void notify_all() _NOEXCEPT { std::__atomic_notify_all(*this); }
 
 #if _LIBCPP_STD_VER >= 20
   _LIBCPP_HIDE_FROM_ABI constexpr __atomic_base() noexcept(is_nothrow_default_constructible_v<_Tp>) : __a_(_Tp()) {}
diff --git a/libcxx/include/__atomic/atomic_flag.h b/libcxx/include/__atomic/atomic_flag.h
index a45a7183547726..f66caf7d1f5217 100644
--- a/libcxx/include/__atomic/atomic_flag.h
+++ b/libcxx/include/__atomic/atomic_flag.h
@@ -15,6 +15,7 @@
 #include <__atomic/memory_order.h>
 #include <__chrono/duration.h>
 #include <__config>
+#include <__memory/addressof.h>
 #include <__thread/support.h>
 #include <cstdint>
 
@@ -47,13 +48,33 @@ struct atomic_flag {
     __cxx_atomic_store(&__a_, _LIBCPP_ATOMIC_FLAG_TYPE(false), __m);
   }
 
+  friend _LIBCPP_AVAILABILITY_SYNC _LIBCPP_HIDE_FROM_ABI _LIBCPP_ATOMIC_FLAG_TYPE
+  __tag_invoke(__atomic_load_cpo, const atomic_flag& __this, memory_order __order) {
+    return std::__cxx_atomic_load(&__this.__a_, __order);
+  }
+
+  friend _LIBCPP_AVAILABILITY_SYNC _LIBCPP_HIDE_FROM_ABI _LIBCPP_ATOMIC_FLAG_TYPE
+  __tag_invoke(__atomic_load_cpo, const volatile atomic_flag& __this, memory_order __order) {
+    return std::__cxx_atomic_load(&__this.__a_, __order);
+  }
+
+  friend _LIBCPP_AVAILABILITY_SYNC _LIBCPP_HIDE_FROM_ABI __cxx_atomic_impl<_LIBCPP_ATOMIC_FLAG_TYPE> const*
+  __tag_invoke(__atomic_contention_address_cpo, const atomic_flag& __this) {
+    return std::addressof(__this.__a_);
+  }
+
+  friend _LIBCPP_AVAILABILITY_SYNC _LIBCPP_HIDE_FROM_ABI __cxx_atomic_impl<_LIBCPP_ATOMIC_FLAG_TYPE> const volatile*
+  __tag_invoke(__atomic_contention_address_cpo, const volatile atomic_flag& __this) {
+    return std::addressof(__this.__a_);
+  }
+
   _LIBCPP_AVAILABILITY_SYNC _LIBCPP_HIDE_FROM_ABI void wait(bool __v, memory_order __m = memory_order_seq_cst) const
       volatile _NOEXCEPT {
-    __cxx_atomic_wait(&__a_, _LIBCPP_ATOMIC_FLAG_TYPE(__v), __m);
+    std::__atomic_wait(*this, _LIBCPP_ATOMIC_FLAG_TYPE(__v), __m);
   }
   _LIBCPP_AVAILABILITY_SYNC _LIBCPP_HIDE_FROM_ABI void
   wait(bool __v, memory_order __m = memory_order_seq_cst) const _NOEXCEPT {
-    __cxx_atomic_wait(&__a_, _LIBCPP_ATOMIC_FLAG_TYPE(__v), __m);
+    std::__atomic_wait(*this, _LIBCPP_ATOMIC_FLAG_TYPE(__v), __m);
   }
   _LIBCPP_AVAILABILITY_SYNC _LIBCPP_HIDE_FROM_ABI void notify_one() volatile _NOEXCEPT {
     __cxx_atomic_notify_one(&__a_);
diff --git a/libcxx/include/__atomic/atomic_sync.h b/libcxx/include/__atomic/atomic_sync.h
index e1994ddde86c19..9adcbaa133ca1b 100644
--- a/libcxx/include/__atomic/atomic_sync.h
+++ b/libcxx/include/__atomic/atomic_sync.h
@@ -18,7 +18,10 @@
 #include <__memory/addressof.h>
 #include <__thread/poll_with_backoff.h>
 #include <__thread/support.h>
+#include <__type_traits/conjunction.h>
 #include <__type_traits/decay.h>
+#include <__type_traits/invoke.h>
+#include <__utility/declval.h>
 #include <cstring>
 
 #if !defined(_LIBCPP_HAS_NO_PRAGMA_SYSTEM_HEADER)
@@ -27,6 +30,60 @@
 
 _LIBCPP_BEGIN_NAMESPACE_STD
 
+// The customisation points to enable the following functions:
+// - __atomic_wait
+// - __atomic_wait_unless
+// - __atomic_notify_one
+// - __atomic_notify_all
+// Note that std::atomic<T>::wait was back-ported to C++03
+// there the below implementations look ugly to support C++03
+
+// NOLINTBEGIN(libcpp-robust-against-adl)
+struct __atomic_load_cpo {
+  template <class _Tp>
+  using _Ret = decltype(__tag_invoke(
+      std::declval<__atomic_load_cpo>(), std::declval<const _Tp&>(), std::declval<memory_order>()));
+
+  template <class _Tp>
+  _LIBCPP_HIDE_FROM_ABI _Ret<_Tp> operator()(const _Tp& __t, memory_order __order) const _NOEXCEPT {
+    return __tag_invoke(*this, __t, __order);
+  }
+};
+// TODO: if we can deprecate std::atomic<T>::wait before c++17, we could add
+// inline constexpr __atomic_load_cpo __atomic_load{};
+
+struct __atomic_contention_address_cpo {
+  template <class _Tp>
+  using _Ret = decltype(__tag_invoke(std::declval<__atomic_contention_address_cpo>(), std::declval<const _Tp&>()));
+
+  template <class _Tp>
+  _LIBCPP_HIDE_FROM_ABI _Ret<_Tp> operator()(const _Tp& __t) const _NOEXCEPT {
+    return __tag_invoke(*this, __t);
+  }
+};
+// TODO: if we can deprecate std::atomic<T>::wait before c++17, we could add
+// inline constexpr __atomic_contention_address_cpo __atomic_contention_address{};
+
+// NOLINTEND(libcpp-robust-against-adl)
+
+template <class _Tp>
+using __atomic_waitable =
+    _And<__invokable<__atomic_load_cpo, const _Tp&, memory_order>,
+         __invokable<__atomic_contention_address_cpo, const _Tp&> >;
+
+template <class _AtomicWaitable, class _Poll>
+struct __atomic_wait_poll_impl {
+  const _AtomicWaitable& __a_;
+  _Poll __poll_;
+  memory_order __order_;
+
+  _LIBCPP_HIDE_FROM_ABI bool operator()() const {
+    __atomic_load_cpo __atomic_load = {};
+    auto __current_val              = __atomic_load(__a_, __order_);
+    return __poll_(__current_val);
+  }
+};
+
 #ifndef _LIBCPP_HAS_NO_THREADS
 
 _LIBCPP_AVAILABILITY_SYNC _LIBCPP_EXPORTED_FROM_ABI void __cxx_atomic_notify_one(void const volatile*);
@@ -43,17 +100,42 @@ __libcpp_atomic_monitor(__cxx_atomic_contention_t const volatile*);
 _LIBCPP_AVAILABILITY_SYNC _LIBCPP_EXPORTED_FROM_ABI void
 __libcpp_atomic_wait(__cxx_atomic_contention_t const volatile*, __cxx_contention_t);
 
-template <class _Atp, class _BackoffTest>
-struct __libcpp_atomic_wait_backoff_impl {
-  _Atp* __a_;
-  _BackoffTest __backoff_test_;
+template <class _AtomicWaitable, class _Poll>
+struct __atomic_wait_backoff_impl {
+  const _AtomicWaitable& __a_;
+  _Poll __poll_;
+  memory_order __order_;
+
+  _LIBCPP_AVAILABILITY_SYNC
+  _LIBCPP_HIDE_FROM_ABI bool
+  __update_monitor_val_and_poll(__cxx_atomic_contention_t const volatile*, __cxx_contention_t& __monitor_val) const {
+    // In case the contention type happens to be __cxx_atomic_contention_t, i.e. __cxx_atomic_impl<int64_t>,
+    // the platform wait is directly monitoring the atomic value itself.
+    __atomic_load_cpo __atomic_load = {};
+    __monitor_val                   = __atomic_load(__a_, __order_);
+    return __poll_(__monitor_val);
+  }
+
+  _LIBCPP_AVAILABILITY_SYNC
+  _LIBCPP_HIDE_FROM_ABI bool
+  __update_monitor_val_and_poll(void const volatile* __contention_address, __cxx_contention_t& __monitor_val) const {
+    // In case the contention type is anything else, platform wait is monitoring a __cxx_atomic_contention_t
+    // from the global pool, the monitor comes from __libcpp_atomic_monitor
+    __monitor_val                   = std::__libcpp_atomic_monitor(__contention_address);
+    __atomic_load_cpo __atomic_load = {};
+    auto __current_val              = __atomic_load(__a_, __order_);
+    return __poll_(__current_val);
+  }
+
   _LIBCPP_AVAILABILITY_SYNC
   _LIBCPP_HIDE_FROM_ABI bool operator()(chrono::nanoseconds __elapsed) const {
     if (__elapsed > chrono::microseconds(64)) {
-      auto __monitor = std::__libcpp_atomic_monitor(__a_);
-      if (__backoff_test_(__monitor))
+      __atomic_contention_address_cpo __atomic_contention_address = {};
+      auto __contention_address                                   = __atomic_contention_address(__a_);
+      __cxx_contention_t __monitor_val;
+      if (__update_monitor_val_and_poll(__contention_address, __monitor_val))
         return true;
-      std::__libcpp_atomic_wait(__a_, __monitor);
+      std::__libcpp_atomic_wait(__contention_address, __monitor_val);
     } else if (__elapsed > chrono::microseconds(4))
       __libcpp_thread_yield();
     else {
@@ -62,37 +144,46 @@ struct __libcpp_atomic_wait_backoff_impl {
   }
 };
 
-template <class _Atp, class _Poll, class _BackoffTest>
-_LIBCPP_AVAILABILITY_SYNC _LIBCPP_HIDE_FROM_ABI bool
-__cxx_atomic_wait(_Atp* __a, _Poll&& __poll, _BackoffTest&& __backoff_test) {
-  __libcpp_atomic_wait_backoff_impl<_Atp, __decay_t<_BackoffTest> > __backoff_fn = {__a, __backoff_test};
-  return std::__libcpp_thread_poll_with_backoff(__poll, __backoff_fn);
+template <class _AtomicWaitable, class _Poll>
+_LIBCPP_AVAILABILITY_SYNC _LIBCPP_HIDE_FROM_ABI void
+__atomic_wait_unless(const _AtomicWaitable& __a, _Poll&& __poll, memory_order __order) {
+  static_assert(__atomic_waitable<_AtomicWaitable>::value, "");
+  __atomic_wait_poll_impl<_AtomicWaitable, __decay_t<_Poll> > __poll_impl     = {__a, __poll, __order};
+  __atomic_wait_backoff_impl<_AtomicWaitable, __decay_t<_Poll> > __backoff_fn = {__a, __poll, __order};
+  std::__libcpp_thread_poll_with_backoff(__poll_impl, __backoff_fn);
 }
 
-template <class _Poll>
-struct __libcpp_atomic_wait_poll_as_backoff_test {
-  _Poll __poll_;
-
-  _LIBCPP_AVAILABILITY_SYNC
-  _LIBCPP_HIDE_FROM_ABI bool operator()(__cxx_contention_t&) const { return __poll_(); }
-};
+template <class _AtomicWaitable>
+_LIBCPP_AVAILABILITY_SYNC _LIBCPP_HIDE_FROM_ABI void __atomic_notify_one(const _AtomicWaitable& __a) {
+  static_assert(__atomic_waitable<_AtomicWaitable>::value, "");
+  __atomic_contention_address_cpo __atomic_contention_address = {};
+  std::__cxx_atomic_notify_one(__atomic_contention_address(__a));
+}
 
-template <class _Atp, class _Poll>
-_LIBCPP_AVAILABILITY_SYNC _LIBCPP_HIDE_FROM_ABI bool __cxx_atomic_wait(_Atp* __a, _Poll&& __poll) {
-  __libcpp_atomic_wait_backoff_impl<_Atp, __libcpp_atomic_wait_poll_as_backoff_test<_Poll&> > __backoff_fn = {
-      __a, {__poll}};
-  return std::__libcpp_thread_poll_with_backoff(__poll, __backoff_fn);
+template <class _AtomicWaitable>
+_LIBCPP_AVAILABILITY_SYNC _LIBCPP_HIDE_FROM_ABI void __atomic_notify_all(const _AtomicWaitable& __a) {
+  static_assert(__atomic_waitable<_AtomicWaitable>::value, "");
+  __atomic_contention_address_cpo __atomic_contention_address = {};
+  std::__cxx_atomic_notify_all(__atomic_contention_address(__a));
 }
 
 #else // _LIBCPP_HAS_NO_THREADS
 
-template <class _Tp>
-_LIBCPP_HIDE_FROM_ABI void __cxx_atomic_notify_all(__cxx_atomic_impl<_Tp> const volatile*) {}
-template <class _Tp>
-_LIBCPP_HIDE_FROM_ABI void __cxx_atomic_notify_one(__cxx_atomic_impl<_Tp> const volatile*) {}
-template <class _Atp, class _Fn>
-_LIBCPP_HIDE_FROM_ABI bool __cxx_atomic_wait(_Atp*, _Fn&& __test_fn) {
-  return std::__libcpp_thread_poll_with_backoff(__test_fn, __spinning_backoff_policy());
+template <class _AtomicWaitable, class _Poll>
+_LIBCPP_HIDE_FROM_ABI void __atomic_wait_unless(const _AtomicWaitable& __a, _Poll&& __poll, memory_order __order) {
+  static_assert(__atomic_waitable<_AtomicWaitable>::value, "");
+  __atomic_wait_poll_impl<_AtomicWaitable, __decay_t<_Poll> > __poll_fn = {__a, __poll, __order};
+  return std::__libcpp_thread_poll_with_backoff(__poll_fn, __spinning_backoff_policy());
+}
+
+template <class _AtomicWaitable>
+_LIBCPP_HIDE_FROM_ABI void __atomic_notify_one(const _AtomicWaitable&) {
+  static_assert(__atomic_waitable<_AtomicWaitable>::value, "");
+}
+
+template <class _AtomicWaitable>
+_LIBCPP_HIDE_FROM_ABI void __atomic_notify_all(const _AtomicWaitable&) {
+  static_assert(__atomic_waitable<_AtomicWaitable>::value, "");
 }
 
 #endif // _LIBCPP_HAS_NO_THREADS
@@ -102,21 +193,20 @@ _LIBCPP_HIDE_FROM_ABI bool __cxx_nonatomic_compare_equal(_Tp const& __lhs, _Tp c
   return std::memcmp(std::addressof(__lhs), std::addressof(__rhs), sizeof(_Tp)) == 0;
 }
 
-template <class _Atp, class _Tp>
-struct __cxx_atomic_wait_test_fn_impl {
-  _Atp* __a;
-  _Tp __val;
-  memory_order __order;
-  _LIBCPP_HIDE_FROM_ABI bool operator()() const {
-    return !std::__cxx_nonatomic_compare_equal(std::__cxx_atomic_load(__a, __order), __val);
+template <class _Tp>
+struct __bind_nonatomic_equal {
+  _Tp __val_;
+  _LIBCPP_HIDE_FROM_ABI bool operator()(const _Tp& __arg) const {
+    return !std::__cxx_nonatomic_compare_equal(__arg, __val_);
   }
 };
 
-template <class _Atp, class _Tp>
-_LIBCPP_AVAILABILITY_SYNC _LIBCPP_HIDE_FROM_ABI bool
-__cxx_atomic_wait(_Atp* __a, _Tp const __val, memory_order __order) {
-  __cxx_atomic_wait_test_fn_impl<_Atp, _Tp> __test_fn = {__a, __val, __order};
-  return std::__cxx_atomic_wait(__a, __test_fn);
+template <class _AtomicWaitable, class _Up>
+_LIBCPP_AVAILABILITY_SYNC _LIBCPP_HIDE_FROM_ABI void
+__atomic_wait(_AtomicWaitable& __a, _Up __val, memory_order __order) {
+  static_assert(__atomic_waitable<_AtomicWaitable>::value, "");
+  __bind_nonatomic_equal<_Up> __nonatomic_equal = {__val};
+  std::__atomic_wait_unless(__a, __nonatomic_equal, __order);
 }
 
 _LIBCPP_END_NAMESPACE_STD
diff --git a/libcxx/include/latch b/libcxx/include/latch
index ad7b35579913b3..684244baaaf1e1 100644
--- a/libcxx/include/latch
+++ b/libcxx/include/latch
@@ -97,9 +97,13 @@ public:
     if (__old == __update)
       __a_.notify_all();
   }
-  inline _LIBCPP_HIDE_FROM_ABI bool try_wait() const noexcept { return 0 == __a_.load(memory_order_acquire); }
+  inline _LIBCPP_HIDE_FROM_ABI bool try_wait() const noexcept {
+    auto __value = __a_.load(memory_order_acquire);
+    return try_wait_impl(__value);
+  }
   inline _LIBCPP_AVAILABILITY_SYNC _LIBCPP_HIDE_FROM_ABI void wait() const {
-    __cxx_atomic_wait(&__a_.__a_, [this]() -> bool { return try_wait(); });
+    std::__atomic_wait_unless(
+        __a_, [this](ptrdiff_t& __value) -> bool { return try_wait_impl(__value); }, memory_order_acquire);
   }
   inline _LIBCPP_AVAILABILITY_SYNC _LIBCPP_HIDE_FROM_ABI void arrive_and_wait(ptrdiff_t __update = 1) {
     _LIBCPP_ASSERT_ARGUMENT_WITHIN_DOMAIN(__update >= 0, "latch::arrive_and_wait called with a negative value");
@@ -108,6 +112,9 @@ public:
     count_down(__update);
     wait();
   }
+
+private:
+  _LIBCPP_HIDE_FROM_ABI bool try_wait_impl(ptrdiff_t& __value) const noexcept { return __value == 0; }
 };
 
 _LIBCPP_END_NAMESPACE_STD
diff --git a/libcxx/include/semaphore b/libcxx/include/semaphore
index 5235d720bf6fee..a49a888d907227 100644
--- a/libcxx/include/semaphore
+++ b/libcxx/include/semaphore
@@ -100,17 +100,8 @@ public:
     }
   }
   _LIBCPP_AVAILABILITY_SYNC _LIBCPP_HIDE_FROM_ABI void acquire() {
-    auto const __poll_fn = [this]() -> bool {
-      auto __old = __a_.load(memory_order_relaxed);
-      return (__old != 0) && __a_.compare_exchange_strong(__old, __old - 1, memory_order_acquire, memory_order_relaxed);
-    };
-    auto const __backoff_test = [this](__cxx_contention_t& __monitor) -> bool {
-      ptrdiff_t __old = __monitor;
-      bool __r        = __try_acquire_impl(__old);
-      __monitor       = static_cast<__cxx_contention_t>(__old);
-      return __r;
-    };
-    __cxx_atomic_wait(&__a_.__a_, __poll_fn, __backoff_test);
+    std::__atomic_wait_unless(
+        __a_, [this](ptrdiff_t& __old) { return __try_acquire_impl(__old); }, memory_order_relaxed);
   }
   template <class _Rep, class _Period>
   _LIBCPP_AVAILABILITY_SYNC _LIBCPP_HIDE_FROM_ABI bool

Member

@ldionne ldionne left a comment


I think this basically LGTM but I would like to see this rebased after #80596 lands.

@huixie90 huixie90 force-pushed the refactor_atomic_wait branch from fc0403a to b65e704 on February 19, 2024 16:00
Member

@ldionne ldionne left a comment


LGTM with comments. Thanks a lot, I think this is a pretty significant improvement in readability and it will unblock atomic_ref.

@huixie90 huixie90 force-pushed the refactor_atomic_wait branch from bad159c to c3e4d3a on March 1, 2024 19:56
@huixie90 huixie90 merged commit 1f613bc into llvm:main Mar 2, 2024